Graphics Hardware

NVIDIA Predicts 570x GPU Performance Boost

Gianna Borgnine writes "NVIDIA is predicting that GPU performance is going to increase a whopping 570-fold in the next six years. According to TG Daily, NVIDIA CEO Jen-Hsun Huang made the prediction at this year's Hot Chips symposium. Huang claimed that while the performance of GPU silicon is heading for a monumental increase in the next six years — making it 570 times faster than the products available today — CPU technology will find itself lagging behind, increasing to a mere 3 times current performance levels. 'Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.'"
This discussion has been archived. No new comments can be posted.

  • Goody! (Score:5, Funny)

    by Anonymous Coward on Friday August 28, 2009 @04:19PM (#29236273)
    Then we can use our GPUs as our CPUs!
  • I see a few tags that cast doubt on the prediction. Why? I'll bet there were skeptics of Moore's Law when it first became widely disseminated.

    What troubles me is that this sort of cell GPU is not more widely used in everyday applications. We who program for a living feel like we have been engaging in 'self stimulation' for years and wish there were some new target platform/market that we could do some interesting work in.
    • by TheRealMindChild ( 743925 ) on Friday August 28, 2009 @04:30PM (#29236425) Homepage Journal
      Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:
      1. The GPU has to become 570-fold more efficient
      2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

      Both seem highly unlikely.

      • by LoudMusic ( 199347 ) on Friday August 28, 2009 @04:50PM (#29236693)

        Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:

        1. The GPU has to become 570-fold more efficient
        2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

        Both seem highly unlikely.

        You don't feel it could be a combination of both? Kind of like they did with multi-core CPUs? Make a single unit more powerful, then use more units ... wow!

        There is more than one way to skin a cat.
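
        (A purely illustrative split, not anything NVIDIA has claimed: roughly 24x more throughput per unit combined with roughly 24x more parallel units gives 24 x 24 = 576, so a 570-fold product never requires either factor to do all the work on its own.)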

        • by Twinbee ( 767046 )

          Or 3d-erize the chip?

      • Even more curious, he claims that GPUs will see 570x improvement, with CPUs only getting 3x.

        One wonders what nigh-miraculous improvement in process, packaging, logic design, etc. will improve GPUs by hundreds of times, while somehow being virtually useless for CPUs...
        • Re: (Score:2, Funny)

          We'll obviously need a Turbo button on it set in the off position until such time that the CPUs catch up.
      • by BikeHelmet ( 1437881 ) on Friday August 28, 2009 @05:07PM (#29236911) Journal

        Or... not.

        Currently CPUs and GPUs are stamped together. Basically, they take a bunch of pre-made blocks of transistors (millions of blocks, billions of transistors in a GPU), and etch those into the silicon, and out comes a working GPU.

        It's easy - relatively speaking - and doesn't require a huge amount of redesign between generations. When you get a certain combination working, you improve (shrink) your nanometre process and add more blocks.

        However, compiler technology has advanced a lot recently, and with the vast amounts of processing power now available, it should be simpler to keep more complex blocks fully utilized. A vastly more complex block, with interconnects to many other blocks, could perform better at a swath of different tasks. This is evident when comparing the performance hit from Anti-Aliasing. Previously even 2xAA had a huge performance hit, but nVidia altered their designs, and now Multisampling AA is basically free.

        I recall seeing an article about a new kind of shadowing that was going to be used in DX11 games. The card used for the review got almost 200fps at high settings - with AA enabled that dropped to about 60fps, and with the new shadowing enabled, it dropped to about 20fps. It appears the hardware needs a redesign to be more optimized for whatever algorithm it uses!

        Two other factors you're forgetting...

        1) 3D CPU/GPU designs are coming slowly, where the transistors aren't just on a 2D plane... that would allow vastly denser CPUs and GPUs. If a processor had minimal leakage, and low power consumption, 500x more transistors wouldn't be a stretch.

        2) Performance claims are merely claims. Intel claims a quad-core gives 4x more performance, but in many cases it's slower than a faster dual-core.

        570x faster for every game? Doubtful. 570x faster at the most advanced rendering techniques being designed today, with AA and other memory-bandwidth hammering features ramped to the max? Might be accurate. A high end GPU from 6 years ago probably won't get 1fps on a modern game, so this estimate might even be low.

        A claim of 250x the framerate in Crysis, with everything ramped to the absolute maximum, might even be accurate.

        But general performance claims are almost never true.

      • by Ant P. ( 974313 ) on Friday August 28, 2009 @05:57PM (#29237485)

        1. The GPU has to become 570-fold more efficient
        2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

        Both seem highly unlikely.

        If graphics card development in the last 10 years is anything to go by, nVidia's plan is that the GPU will become 570 times larger, draw 570 times more power, and the fan will spin 570 times faster.

      • Re: (Score:3, Informative)

        by tyrione ( 134248 )

        Well, it comes down to simple math. For the performance to get to 570-fold more than what it is now, in the same style package, either:

        1. The GPU has to become 570-fold more efficient
        2. The GPU has to become ~570-fold smaller so they can fit 570 of the things onto a card

        Both seem highly unlikely.

        It's not a linear relationship.

      • Re: (Score:3, Funny)

        by Lord Ender ( 156273 )

        Either

        • your post contains a false dichotomy, or
        • Angelina Jolie is giving me a blowjob right now

        Neither seem highly unlikely.

    • by eln ( 21727 ) on Friday August 28, 2009 @04:30PM (#29236429)
      I don't doubt the prediction at all, I just have concerns about the vat of liquid nitrogen I'm going to have to immerse my computer in to keep that thing from overheating, and the power substation I'm going to need to build in my backyard to power it.
      • I don't doubt the prediction at all, I just have concerns about the vat of liquid nitrogen I'm going to have to immerse my computer in to keep that thing from overheating, and the power substation I'm going to need to build in my backyard to power it.

        But GPUs today are somewhat more than 570x more powerful than they were several years ago and we haven't had to submerge them in a vat of liquid nitrogen yet, so what makes you think that's going to be the case in the next 570x power increase? (whenever that happens ...)

        • by eln ( 21727 )
          Maybe not, but they do require a lot more cooling and power than they did before.
          • Perhaps for the top end models that holds true, but as the market for roll-your-own HTPCs has shown (at least in terms of cooling), there are plenty of passive heat sink options available.
          • Re: (Score:2, Insightful)

            by melf-san ( 1504607 )
            Maybe the high-end ones, but the low-end GPUs are mostly passively cooled and still much more powerful than old GPUs.
    • He constantly runs his mouth without any real thought to what he's saying. It's just attention whoring.

      • Re: (Score:2, Flamebait)

        by vivek7006 ( 585218 )

        Mod parent up.

        Jen-Hsun Huang is a certified clown who just a short while back was running around saying things like 'we will open a can of whoop-ass on Intel'.

        What a dumbass ...

      • It's even more bullshitty than normal, since he's also evidently predicting the end of Moore's law. CPUs only improving by 3x in 6 years?!

        6 years/1.5 years = 4 Cycles of Moore's law.

        2^4 = 16x performance increase.

        So I guess Moore's law is about to go from a doubling every 18 months to a doubling every 4 years or so? When did that happen?

    • Re: (Score:3, Interesting)

      by javaman235 ( 461502 )

      It's easy to get a 570x increase with parallel cores. You will just have a GPU that is 570 times bigger, costs 570 times more, and consumes 570 times more energy. As far as any kind of real breakthrough goes, though, I'm not seeing it from the information at hand.

      There is something worthy of note in all this, though, which is that the new way of doing business is through massive parallelism. We've all known this was coming for a long time, but it's officially here.

    • by Anonymous Coward on Friday August 28, 2009 @04:39PM (#29236547)

      The prediction is complete nonsense. It assumes that CPU processors only get 20% faster per year (compounded). That would only be true if they did not add more cores to the CPU. And finally GPUs are hitting the same thermal/power leakage wall that CPUs hit several years ago - they will at best get faster in lock step with CPUs.

      A GPU is not a general purpose processor, as is a CPU. It is only good at performing a large number of repetitive single precision (32 bit) floating point calculations without branching. Double precision (64 bit) calculations - double in C speak - are 4 times slower than single precision on a GPU. And the second you have an "if" in GPU code, everything grinds to a halt. Conditions effectively break the GPU SIMD (single instruction multiple data) model and bring the pipeline to a halt.
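
      A minimal CUDA sketch of that branching point (the kernel names and the sqrtf example are made up, host-side setup is omitted, and in practice the compiler may predicate a branch this small away, so treat it as the worst case rather than a guaranteed stall): when the 32 threads of a warp disagree on an "if", the hardware runs both sides with lanes masked off, so the warp pays for the sum of the paths instead of just one of them.

        __global__ void branchy(const float *in, float *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            // Divergent branch: lanes taking different sides serialize,
            // so the warp executes the sqrtf path AND the zero path.
            if (in[i] > 0.0f)
                out[i] = sqrtf(in[i]);
            else
                out[i] = 0.0f;
        }

        __global__ void branchless(const float *in, float *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;
            // Same result with no data-dependent branch: clamping negatives
            // to zero keeps every lane on the identical instruction stream.
            out[i] = sqrtf(fmaxf(in[i], 0.0f));
        }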

      • Re: (Score:3, Insightful)

        by ceoyoyo ( 59147 )

        "It assumes that CPU processors only get 20% faster per year (compounded). That would only be true if they did not add more cores to the CPU."

        "It is only good at performing a large number of repetitive single precision (32 bit) floating point calculations without branching."

        If we wanted a 64-bit GPU it would be easy enough to make. GPUs used to do weird mixes of integer and floating point math until the manufacturers made an effort to guarantee 32-bit precision throughout. That leaves the branching part o

    • by geekoid ( 135745 )

      Moore's law is ending. The fab issues at that scale get costly, and Moore's law is about cost, not speed.

  • In other news... (Score:5, Informative)

    by Hadlock ( 143607 ) on Friday August 28, 2009 @04:22PM (#29236317) Homepage Journal

    In other news, ATI is selling its 4870-series cards for $130 on Newegg; the 4870 is twice as fast as an Nvidia 9800GTS at the same price (at least in Left 4 Dead, Call of Duty, and any other game that matters). ATI is blowing Nvidia out of the water in terms of performance per dollar and will continue to do so through at least the middle of next year. See here:

    http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/benchmarks,62.html [tomshardware.com]

    Yeah, I'd be making outrageous statements too if I were Nvidia.

    • Re: (Score:2, Informative)

      by Hadlock ( 143607 )

      Here's the L4D comparo, sorry for the wrong link:

      http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/Left4Dead,1455.html [tomshardware.com]

      The 9800GT and 8800GT are both in the 40-60fps range, while the 4870 (single processor) is in the 106fps range. It's a pretty staggering difference.

      • by JustNiz ( 692889 )

        Yeah, and the 280 is in the 120fps range. What's your point?

        • The GTX 280 goes for $240 on NewEgg [newegg.com]. Around a 15 FPS difference for $100.
      • Re: (Score:3, Insightful)

        by fractoid ( 1076465 )
        WTF Mods. He's just saying that at this price point you can get nearly double the performance from ATI than from nVidia. I love nVidia too, I run a 9800GT, but I'm not going to mod someone troll for pointing out that something else is now faster and cheaper.
    • Re: (Score:3, Informative)

      by Spatial ( 1235392 )
      Troll mod? No, this is mostly true.

      While his example is wrong (Nvidia's competitor to the HD4870 is the GTX 260 c216), AMD do have better value for money on their side. The HD4870 is evenly matched but a good bit cheaper.

      The situation is similar in the CPU domain. The Phenom IIs are slightly slower per-clock than the Core 2s they compete with, but are considerably cheaper.
      • Re:In other news... (Score:4, Informative)

        by MrBandersnatch ( 544818 ) on Friday August 28, 2009 @05:24PM (#29237113)

        Depending on the vendor, it is now possible to get a 275 for less than a 4890, and a 260 for only slightly more than a 4870; at lower prices it's very competitive too. My point is that both NV and ATI are on pretty level ground again, and the ONLY reason I now choose NV over ATI is the superior NV drivers (on both the Linux and Windows side)... oh, and the fact that ATI pulled a fast one on me with their AVIVO performance claims. Shame on you, ATI!

    • Re:In other news... (Score:4, Interesting)

      by TeXMaster ( 593524 ) on Friday August 28, 2009 @05:24PM (#29237109)

      In other news, ATI is selling its 4870-series cards for $130 on Newegg; the 4870 is twice as fast as an Nvidia 9800GTS at the same price (at least in Left 4 Dead, Call of Duty, and any other game that matters). ATI is blowing Nvidia out of the water in terms of performance per dollar and will continue to do so through at least the middle of next year. See here:

      http://www.tomshardware.com/charts/gaming-graphics-cards-charts-2009-high-quality/benchmarks,62.html [tomshardware.com]

      Yeah, I'd be making outrageous statements too if I were Nvidia.

      Even when it comes to GPGPU (General Purpose computing on the GPU), ATI's hardware is much better than NVIDIA's. However, the programming interfaces for ATI suck big time, whereas NVIDIA's CUDA is much more comfortable to code for, and it has an extensive range of documentation and examples that provide developers with all they need to improve their NVIDIA GPGPU programming. It also has much more aggressive marketing.

      As a sad result, NVIDIA is often the platform of choice for GPU usage in HPC, despite having inferior hardware. And I doubt OpenCL is going to fix this, since it basically standardizes the low-level API, leaving NVIDIA with its superior high-level API.
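
      For a sense of what "comfortable to code for" means in practice, here is a complete toy CUDA runtime-API program (a sketch; the scale kernel and buffer names are made up for illustration): allocate, copy, launch with the <<<grid, block>>> syntax, copy back. The OpenCL equivalent needs considerably more host-side boilerplate for platform, context, queue and kernel-source handling before it does the same work.

        #include <cuda_runtime.h>
        #include <stdio.h>
        #include <stdlib.h>

        __global__ void scale(float *buf, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) buf[i] *= 2.0f;                    // double every element in place
        }

        int main(void)
        {
            const int n = 1 << 20;
            float *h_buf = (float *)malloc(n * sizeof(float));
            for (int i = 0; i < n; ++i) h_buf[i] = 1.0f;

            float *d_buf;
            cudaMalloc((void **)&d_buf, n * sizeof(float));
            cudaMemcpy(d_buf, h_buf, n * sizeof(float), cudaMemcpyHostToDevice);

            scale<<<(n + 255) / 256, 256>>>(d_buf, n);    // one thread per element

            cudaMemcpy(h_buf, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
            printf("first element: %.1f\n", h_buf[0]);    // prints 2.0

            cudaFree(d_buf);
            free(h_buf);
            return 0;
        }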

      • In addition to VDPAU-enabled mplayer, I can actually FIND CUDA-enabled apps. There are CUDA-enabled MD5 crackers, a CUDA-enabled BOINC, and Matlab has a CUDA plugin. I'm considering buying a CUDA-compatible card so I can install it at work just to play with it in Matlab.

    • Re: (Score:3, Insightful)

      by 7-Vodka ( 195504 )
      And their linux drivers still SUCK.
      • Re: (Score:3, Insightful)

        I agree. I recently bought a laptop with an ATI card, and the biggest reason I did so is that I heard they went open source. I was disappointed to find that their latest Catalyst driver doesn't work well on Ubuntu 9.04. The one recommended by Ubuntu works, but it's VERY slow when restoring a window in Compiz. All in all it feels like a downgrade compared to my Intel integrated graphics card. Sigh. :(

  • But how? (Score:3, Insightful)

    by Anonymous Coward on Friday August 28, 2009 @04:24PM (#29236333)

    I read the article, but I don't see any explanation of how exactly that performance increase will come about. Nor is there any explanation of why GPUs will see the increase but CPUs will not. Anyone have a better article on the matter?

  • Good to know! (Score:5, Insightful)

    by CopaceticOpus ( 965603 ) on Friday August 28, 2009 @04:30PM (#29236431)

    Thanks for the heads up, Nvidia! I'll be sure to hold off for 6 years on buying anything with a GPU.

    • That was my immediate thought as well.

      We're about to drop $250K on a GPU cluster, and if performance increases that much in 6 years, why on earth would we buy now?

      Dammit, there's just no win when you fork out for clusters (of any kind).

      We should spend $50K now, stick the rest into stocks, and buy another $50K every year. Of course, the dudes up the tree don't like that kind of thinking.

  • So... (Score:3, Funny)

    by XPeter ( 1429763 ) * on Friday August 28, 2009 @04:31PM (#29236439) Homepage

    I have to wait six years to play Crysis?

  • by Yurka ( 468420 )

    But they'd better hurry up with "Mr. Fusion" which will be needed to power that thing, and finally buy the license for that demon of Mr. Maxwell's to cool it.

  • Seriously, it's so easy to give ambiguous figures; then they can't be held to them.
  • > 'Huang also discussed a number of "real-world" GPU applications, including energy exploration, interactive ray tracing and CGI simulations.'

    Add to that 'MD5 collisions, etc.'

    GPU coding really is going to separate the men from the boys. I sense a return to the old days, where people had to think about coding, and where brilliant discoveries were made.
    ( like this: http://en.wikipedia.org/wiki/HAKMEM [wikipedia.org] )

    Darn, pity I'm too old now. I'll have a play though...

  • 6 years = 72 months

    Moore's Law states a doubling in transistors (but we'll call it performance) at every 18 month interval, so:

    72/18 = 4 Moore cycles

    2^4 = 16

    So in six years, Gordon Moore says we should have 16x the performance we have now.

    But it's indeed interesting... Silicon was a much easier-to-predict medium in the 20th Century. And yet here we have these two mature, opposing approaches to silicon-based computing, represented by the CPU and the GPU, with some predicting unprecedented growth for one and

    • by McNihil ( 612243 )

      Or what he is actually saying is that they (nVidia) will have more than 9 generations (~9.15) within 6 years... 1.5 generations/year... which I believe is fairly doable and actually slightly slower than the 6 month release cycle we have been accustomed to since 1998.

      In other words "business as usual"

    • Re:The math (Score:5, Insightful)

      by BikeHelmet ( 1437881 ) on Friday August 28, 2009 @05:13PM (#29236985) Journal

      So in six years, Gordon Moore says we should have 16x the performance we have now.

      No - 16x the transistors.

      You fail to predict how using those transistors in a more optimized way (more suitable to modern rendering algorithms) will affect performance.

      Just think about it - a plain old FPU and SSE4 might use the same number of transistors, but when the code needs to do a lot of fancy stuff at once, one is definitely faster.

      (inaccurate example, but you get the idea)

  • by Entropius ( 188861 ) on Friday August 28, 2009 @04:39PM (#29236541)

    I do high-performance lattice QCD calculations as a grad student. At the moment I'm running code on 2048 Opteron cores, which is about typical for us -- I think the big jobs use 4096 sometimes. We soak up a *lot* of CPU time on some large machines -- hundreds of millions of core-hours -- so making this stuff run faster is something People Care About.

    This sort of problem is very well suited to being put on GPU's, since the simulations are done on a four-dimensional lattice (say 40x40x40x96 -- for technical reasons the time direction is elongated) and since "do this to the whole lattice" is something that can be parallelized easily. The trouble is that the GPU's don't have enough RAM to fit everything into memory (which is understandable; the lattices are huge) and communications between multiple GPU's are slow (since we have to go GPU -> PCI Express -> Infiniband).

    If Nvidia were to make GPU's with extra RAM (could you stuff 16GB on a card?) or a way to connect them to each other by some faster method, they'd make a lot of scientists happy.
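
    As a sketch of why "do this to the whole lattice" maps so naturally onto a GPU (illustrative CUDA only; the kernel, the flat field layout and the toy nearest-neighbour update are assumptions, not the poster's actual QCD code, and the host-side launch is omitted): give every one of the 40x40x40x96 sites its own thread and a whole sweep is a single kernel launch. The part that hurts, as the poster says, is everything that has to leave the card afterwards.

      #define NX 40
      #define NY 40
      #define NZ 40
      #define NT 96

      // One thread per lattice site, sites flattened to a 1D index (x fastest).
      __global__ void sweep(const float *field, float *out)
      {
          int idx = blockIdx.x * blockDim.x + threadIdx.x;
          int nsites = NX * NY * NZ * NT;
          if (idx >= nsites) return;

          // Recover the 4D coordinates from the flat index.
          int x = idx % NX;
          int y = (idx / NX) % NY;
          int z = (idx / (NX * NY)) % NZ;
          int t = idx / (NX * NY * NZ);

          // Toy update: average the site with its forward neighbour in time,
          // with periodic wraparound. A real QCD sweep combines all eight
          // neighbours and SU(3) matrices, but parallelizes the same way.
          int tp = (t + 1) % NT;
          int fwd = x + NX * (y + NY * (z + NZ * tp));
          out[idx] = 0.5f * (field[idx] + field[fwd]);
      }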

    • Perhaps you know my brother? He has been doing a lot of this stuff.

      http://www.google.com/search?client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&channel=s&hl=en&source=hp&q=vandesande+qcd&btnG=Google+Search

    • by Chirs ( 87576 )

      Could you grab some motherboards with multiple expansion slots and load them up with dual-gpu boards?

      • by Entropius ( 188861 ) on Friday August 28, 2009 @05:24PM (#29237101)

        You can -- that's what people are trying now. The issue is that in order for the GPU's to communicate, they've got to go over the PCI Express bus to the motherboard, and then via whatever interconnect you use from one motherboard to another.

        I don't know all the details, but the people who have studied this say that PCI Express (or, more specifically, the PCI Express to Infiniband connection) is a serious bottleneck.
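
        For scale, some rough 2009-era ballpark numbers (not from the thread): a high-end card streams its own GDDR memory at well over 100 GB/s, PCI Express 2.0 x16 moves roughly 8 GB/s in each direction, and a DDR/QDR InfiniBand link sits somewhere around 2-4 GB/s. Every hop off the card therefore costs one to two orders of magnitude in bandwidth before latency is even counted, which is exactly the bottleneck described above.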

    • by ae1294 ( 1547521 )

      If Nvidia were to make GPU's with extra RAM (could you stuff 16GB on a card?) or a way to connect them to each other by some faster method, they'd make a lot of scientists happy.

      Do you really need to ask them to do this for you? I'd think that if you are a grad student you might be able to get together with some Electrical Engineering students, rig something up, and turn a profit! The only thing you really need to know is how much memory the GPU can address, whether you can get hold of the source for the drivers, etc.

      A video card isn't much more than a GPU with memory soldered on to it...

  • by Dyinobal ( 1427207 ) on Friday August 28, 2009 @04:53PM (#29236729)
    Will I need a separate power supply or two to run these new video cards? Or will they include their own fission reactors?
  • by Minwee ( 522556 ) <dcr@neverwhen.org> on Friday August 28, 2009 @05:16PM (#29237025) Homepage

    "Did I mention that our next model is going to be SO amazing that you'll think that our current product is crap? The new model will make EVERYTHING obsolete and the entire world will need to upgrade to it when it comes out. People won't even be able to give away any older products. Sooooo... how many of this year's model will you be buying today?

    "Hello? Are you still there?

    "Hello?"

  • % VS Times (Score:5, Interesting)

    by AmigaHeretic ( 991368 ) on Friday August 28, 2009 @05:38PM (#29237269) Journal
    I'm sure this is just another case of some moron seeing 570% increase and going, WoW! my next GPU will be 570 TIMES faster!!

    For the rest of us of course 570% increase is 5.7X faster.

    So, CPUs increasing 3X in the next 6 years and GPUs increasing 5.7X I can maybe believe.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      So a 100% increase would be 1.0X faster?

    • Re:% VS Times (Score:5, Informative)

      by glwtta ( 532858 ) on Friday August 28, 2009 @08:48PM (#29238737) Homepage
      For the rest of us of course 570% increase is 5.7X faster.

      It seems the rest of us don't understand what a "percent increase" means, either.

      (hint: 570% increase == 6.7X)
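
      Spelled out: an increase of X% multiplies the original by (1 + X/100), so a 570% increase is a factor of 6.7, while a factor of 5.7 would be a 470% increase.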
  • is another way to say it.
  • by Mursk ( 928595 ) on Friday August 28, 2009 @05:44PM (#29237351)
    I hope it looks like this:

    http://www.russdraper.com/images/fullsize/bitchin_fast_3d.jpg [russdraper.com]
  • Current Nvidia GPUs have 500 gigaflops of performance in single precision. 20 teraflops would be 40 times faster. 570 times faster in 2016 would be 285 teraflops.
  • Yeah, they also always say that batteries are 10 times better. But can I use my phone for 50 days instead of 5 days without recharging? No, it's always the same number of days as 10 years ago.

I cannot conceive that anybody will require multiplications at the rate of 40,000 or even 4,000 per hour ... -- F. H. Wales (1936)
